perm filename QLISP[E86,JMC] blob sn#824327 filedate 1986-09-15 generic text, type C, neo UTF8
qlisp[e86,jmc]	section qlisp paper with rpg

Efficient QLISP AND-parallelism

	The most straightforward possibilities for achieving effective
parallelism occur when only AND-parallelism is required.  However,
since LISP programming is recursive, there is still the problem of
avoiding too much parallelism.  For example, the SUBST program of
section xxx will generate parallelism in computing the arguments
of a single CONS.  It is straightforward to avoid excessive
parallelism if the program can tell how big a computation will be.
If it is going to be too small, the QLET propositional parameter
can be set to (), or, better yet, control can be given to a version
of the program that has no parallelism.  In general, it seems to us
that efficiency won't be very sensitive to how the parallelism
is arranged, provided that the number of times parallel bookkeeping
is done is kept down, and this will be true if parallelism is
avoided for small computations.
The following are three ways of doing this.

	1. It may be immediate how big the computation is.
Consider computing $n!$.  In order to do the computation
recursively, we generalize it to computing the product
of the numbers from $m$ to $n$.  If $|n-m|$ is small enough,
we can call a function that multiplies them non-recursively.
Otherwise, we split the range in half and compute the products
of the halves in parallel.  This immediately generalizes to
computing any commutative operation applied to elements indexed
by the numbers from $m$ to $n$.  It's easy, because $|n-m|$
provides an estimate of the size of the problem that can
be used to determine whether parallelism is worthwhile.

	In this example it might be better to put the parallelism in the
BIGNUM multiplication rather than in the overall structure.  The point is
that it is immediate here whether the computation is too small to do in
parallel.
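	In QLISP notation the idea might be sketched as follows.
The cutoff constant PRODUCT-CUTOFF and the non-recursive helper
SEQ-PRODUCT are hypothetical names introduced for this sketch, not
part of QLISP:

```lisp
;; Product of the integers from M to N.  When the range is below
;; PRODUCT-CUTOFF (a hypothetical tuning constant) we multiply
;; serially with the hypothetical SEQ-PRODUCT; otherwise we split
;; the range and compute the halves in parallel.
;; (PRODUCT 1 N) computes N factorial.
(DEFUN PRODUCT (M N)
  (IF (< (- N M) PRODUCT-CUTOFF)
      (SEQ-PRODUCT M N)              ; small range: serial multiplication
      (LET ((MID (// (+ M N) 2)))    ; midpoint of the range
        (QLET T ((LEFT (PRODUCT M MID))
                 (RIGHT (PRODUCT (1+ MID) N)))
          (* LEFT RIGHT)))))
```

Here the propositional parameter of QLET is simply T, because the size
test has already routed small ranges to SEQ-PRODUCT; one could instead
make the test itself the propositional parameter, but the version
above also avoids evaluating the QLET at all on small ranges.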

	2. Another case that can be picked off occurs when the
size of the computation can be computed by a linear scan of
one of the arguments of the function, but the time required to
compute the function is much larger than the time taken by
the scan.  In that case, we don't waste much by computing
the size of the computation in advance and deciding about
parallelism on the basis of the result.
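	As an illustration of this case, here is a sketch in
QLISP-like notation of applying an expensive function to every
element of a list.  PARMAP, PAR-CUTOFF, and FIRSTN are names we have
made up for the sketch; the LENGTH scan is the advance size
computation:

```lisp
;; Apply an expensive function F to every element of the list L.
;; A single linear scan (LENGTH) decides whether parallelism is
;; worthwhile; this is cheap provided each call to F costs much
;; more than the scan.  PAR-CUTOFF (a tuning constant) and FIRSTN
;; (the first k elements of a list) are hypothetical.
(DEFUN PARMAP (F L)
  (LET ((N (LENGTH L)))
    (IF (< N PAR-CUTOFF)
        (MAPCAR F L)                 ; too small: stay serial
        (QLET T ((FRONT (MAPCAR F (FIRSTN (// N 2) L)))
                 (BACK (PARMAP F (NTHCDR (// N 2) L))))
          (NCONC FRONT BACK)))))
```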

	From this point of view, SUBST presents the most
difficult case.  The time required to determine the size of
the computation is as large as the time required to do the
computation.  In that case one might as well gamble on
parallelism if the program is really going to have to do
SUBSTs with no advance estimate of the size of the expressions.
This might occur, for example, if we have to provide a library
program for doing SUBST in QLISP.
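	A library SUBST that takes this gamble might be sketched as:

```lisp
;; SUBST with no advance size estimate: gamble by forking at
;; every interior node.  The propositional parameter is simply T.
(DEFUN PARSUBST (X Y Z)
  (IF (ATOM Z)
      (IF (EQ Z Y) X Z)
      (QLET T ((A (PARSUBST X Y (CAR Z)))
               (D (PARSUBST X Y (CDR Z))))
        (CONS A D))))
```

If forking at every CONS proves to be too much bookkeeping, one
possible compromise (our suggestion, not part of QLISP) is to make
the propositional parameter a depth bound, so the gamble is taken
only near the top of the expression.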

	However, if substitution or some operation with similar
characteristics is going to occur frequently in a
program, it may be worthwhile to include in a data structure
an estimate of its size.  The extreme would be to use a modified
CONS that includes a size as well as the two pointers.  We
suspect that this is rarely worthwhile and that it will be
more common to use data structures that contain size estimates
more sparingly.  For example, algebraic expressions may include
size estimates without including them in every CONS.
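	For instance, a sum might carry a size field at its top level
only.  The representation below, and the names MAKE-SUM and
EXPR-SIZE, are invented for illustration:

```lisp
;; Represent a sum as (SUM size . terms), storing a size estimate
;; once per sum rather than in every CONS.  EXPR-SIZE uses a
;; stored estimate when one is present and otherwise counts atoms.
(DEFUN EXPR-SIZE (E)
  (COND ((ATOM E) 1)
        ((EQ (CAR E) 'SUM) (CADR E))   ; stored estimate
        (T (+ (EXPR-SIZE (CAR E)) (EXPR-SIZE (CDR E))))))

(DEFUN MAKE-SUM (TERMS)
  (CONS 'SUM (CONS (APPLY '+ (MAPCAR 'EXPR-SIZE TERMS)) TERMS)))
```

A program like SUBST can then consult the stored estimate when
deciding how to set the QLET propositional parameter, without paying
for a size field in every CONS.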

Exercise: Suppose we have sums and products built up from constants
and variables.  Devise a data structure that includes sizes
and write a partial differentiation program that produces such
sizes for the derivative and its subexpressions.

Exercise (harder): Do the same for a simplification program or
a partial differentiation program that does simplification.